Assessing Wind Impact on Semi-Autonomous Drone Landings for In-Contact Power Line Inspection

Gendron, Etienne, Leclerc, Marc-Antoine, Hovington, Samuel, Perron, Etienne, Rancourt, David, Lussier-Desbiens, Alexis, Hamelin, Philippe, Girard, Alexandre

arXiv.org Artificial Intelligence

In recent years, the use of inspection drones has become increasingly popular for high-voltage electric cable inspections due to their efficiency, cost-effectiveness, and ability to access hard-to-reach areas. However, safely landing drones on power lines, especially under windy conditions, remains a significant challenge. This study introduces a semi-autonomous control scheme for landing on an electrical line with the NADILE drone (an experimental drone based on original LineDrone key features for inspection of power lines) and assesses the operating envelope under various wind conditions. A Monte Carlo method is employed to analyze the probability of landing success given the drone's initial states. The performance of the system is evaluated for two landing strategies, various controller parameters, and four levels of wind intensity. The results show that a two-stage landing strategy offers a higher probability of landing success, and they give insight into the best controller parameters and the maximum wind level for which the system is robust. Lastly, an experimental demonstration of the system landing autonomously on a power line is presented.
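The Monte Carlo analysis described above can be sketched as follows. The toy dynamics, gains, and success thresholds here are illustrative placeholders, not the NADILE model or the paper's actual simulator:

```python
import random

def simulate_landing(x0, vz0, wind, gain=2.0):
    """Toy landing model (hypothetical): a corrective controller shrinks
    the lateral offset each step while wind pushes the drone sideways;
    landing succeeds if touchdown is slow and nearly centred on the line."""
    x, vz = x0, vz0
    for _ in range(50):
        x += wind * 0.01 - gain * x * 0.05   # wind push vs. corrective gain
        vz = max(vz - 0.02, 0.1)             # decelerate toward touchdown speed
    return abs(x) < 0.05 and vz < 0.3

def landing_success_probability(wind, n_trials=5000, seed=0):
    """Estimate P(success) by sampling random initial drone states."""
    rng = random.Random(seed)
    successes = sum(
        simulate_landing(rng.uniform(-0.2, 0.2), rng.uniform(0.2, 1.0), wind)
        for _ in range(n_trials)
    )
    return successes / n_trials
```

Sweeping `wind` over a grid of intensities (and repeating per controller parameter set) yields the success-probability maps that define the operating envelope.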


Inverted Landing in a Small Aerial Robot via Deep Reinforcement Learning for Triggering and Control of Rotational Maneuvers

Habas, Bryan, Langelaan, Jack W., Cheng, Bo

arXiv.org Artificial Intelligence

Inverted landing in a rapid and robust manner is a challenging feat for aerial robots, especially while depending entirely on onboard sensing and computation. Nevertheless, this feat is routinely performed by biological fliers such as bats, flies, and bees. Our previous work identified a direct causal connection between a series of onboard visual cues and kinematic actions that allow for reliable execution of this challenging aerobatic maneuver in small aerial robots. In this work, we first utilized Deep Reinforcement Learning and a physics-based simulation to obtain a general, optimal control policy for robust inverted landing starting from any arbitrary approach condition. This optimized control policy provides a computationally efficient mapping from the system's observational space to its motor command action space, including both triggering and control of rotational maneuvers. This was done by training the system over a large range of approach flight velocities that varied in magnitude and direction. Next, we performed a sim-to-real transfer and experimental validation of the learned policy via domain randomization, by varying the robot's inertial parameters in the simulation. Through experimental trials, we identified several dominant factors that greatly improved landing robustness, as well as the primary mechanisms that determined inverted landing success. We expect that the learning framework developed in this study can be generalized to solve more challenging tasks, such as utilizing noisy onboard sensory data, landing on surfaces of various orientations, or landing on dynamically moving surfaces.
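The domain-randomization step mentioned above can be sketched as below. The parameter names, nominal values, and spread are illustrative assumptions, not the robot's actual specifications:

```python
import random

def randomized_inertial_params(rng, nominal_mass=0.035, nominal_ixx=2.4e-5,
                               spread=0.2):
    """Sample perturbed inertial parameters for one training episode.
    Each value is scaled uniformly within +/- `spread` of its nominal,
    so the policy never overfits to a single simulated body."""
    scale = lambda v: v * rng.uniform(1.0 - spread, 1.0 + spread)
    return {"mass": scale(nominal_mass), "Ixx": scale(nominal_ixx)}
```

A typical training loop would call this at the start of every episode and pass the sampled parameters to the simulator, so the learned policy is exposed to the full spread of plausible dynamics before sim-to-real transfer.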


Adapting Neural Models with Sequential Monte Carlo Dropout

Carreno-Medrano, Pamela, Kulić, Dana, Burke, Michael

arXiv.org Artificial Intelligence

Neural models and policies are now ubiquitous in modern robotics. The prevailing approach to training these follows a two-stage process - a large, comprehensive collection of data (often state and action pairs) is used to train a suitable model or policy, which is then frozen and deployed. Unfortunately, this results in models that are unable to adapt to changes in the environment, which is a particular concern in robotics. For example, it would be preferable for a robot dynamics model to handle context-dependent kinematic or dynamic properties, or for a collaborative robot relying on predictions of human behaviour to adapt to different human abilities or preferences. Many existing adaptive control techniques [1] attempting to tackle this problem rely on carefully considered parametric models, but these may lack the requisite capacity for prediction that is typically associated with neural models. In contrast, meta-learning and adaptive neural control approaches addressing this problem are often quite cumbersome to train and implement. This paper introduces a simple and effective approach to achieve adaptation for neural network models.
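The core idea of sequential Monte Carlo over dropout configurations can be illustrated with a toy model. This is a sketch of the general technique, not the paper's implementation: each particle is a binary dropout mask over a fixed set of weights, particles are reweighted by how well the masked model explains each new observation, then resampled:

```python
import math
import random

def smc_dropout_adapt(weights, data, n_particles=200, sigma=0.5, seed=0):
    """Toy sequential Monte Carlo over dropout masks: each particle is a
    binary mask selecting a subnetwork of `weights`; as observations
    (x, y) arrive, particles are reweighted by a Gaussian likelihood
    of the masked model's prediction, then resampled."""
    rng = random.Random(seed)
    particles = [[rng.randint(0, 1) for _ in weights]
                 for _ in range(n_particles)]
    for x, y in data:
        preds = [sum(w * m for w, m in zip(weights, p)) * x
                 for p in particles]
        log_w = [-(pr - y) ** 2 / (2 * sigma ** 2) for pr in preds]
        mx = max(log_w)                       # stabilise the exponentials
        ws = [math.exp(l - mx) for l in log_w]
        total = sum(ws)
        particles = rng.choices(particles,
                                weights=[w / total for w in ws],
                                k=n_particles)
    return particles

def posterior_prediction(weights, particles, x):
    """Average the masked models' predictions over the particle set."""
    return sum(sum(w * m for w, m in zip(weights, p)) * x
               for p in particles) / len(particles)
```

After a few observations generated by one particular subnetwork, the particle population concentrates on masks consistent with that data, so the averaged prediction tracks the changed environment without retraining the frozen weights.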